Accelerated Dual Averaging Methods for Decentralized Constrained Optimization
Authors
Abstract
In this article, we study decentralized convex constrained optimization problems in networks. We focus on the dual averaging-based algorithmic framework that is well documented to be superior in handling constraints and complex communication environments simultaneously. Two new decentralized dual averaging (DDA) algorithms are proposed. In the first one, a second-order dynamic average consensus protocol is tailored for DDA-type algorithms, which equips each agent with a provably more accurate estimate of the global dual variable than conventional schemes. We rigorously prove that the proposed algorithm attains $\mathcal{O}(1/t)$ convergence for general convex and smooth problems, for which existing DDA methods were only known to converge at $\mathcal{O}(1/\sqrt{t})$ prior to our work. In the second one, we use the extrapolation technique to accelerate the convergence of DDA. Compared to existing accelerated algorithms, where typically two different variables are exchanged among agents at each time, the proposed algorithm only seeks consensus on local gradients. Then, the extrapolation is performed based on two sequences of primal variables, which are determined by the accumulations of gradients at two consecutive time instants, respectively. The algorithm is proved to converge at a rate of $\mathcal{O}(1)\left(\frac{1}{t^{2}}+\frac{1}{t(1-\beta)^{2}}\right)$, where $\beta$ denotes the second largest singular value of the mixing matrix. We remark that the condition on the algorithmic parameter to guarantee convergence does not rely on the spectrum of the mixing matrix, making it easy to satisfy in practice. Finally, numerical results are presented to demonstrate the efficiency of the proposed methods.
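To make the setting concrete, the following is a minimal sketch of the classical decentralized dual averaging iteration (in the style of Duchi, Agarwal, and Wainwright) that both proposed algorithms build upon: each agent mixes accumulated gradients with its neighbors, then recovers its primal iterate by a regularized projection onto the constraint set. The ring network, quadratic local losses, ball constraint, and step-size choice below are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

# Minimal sketch of one classical decentralized dual averaging (DDA)
# step. All concrete choices (ring mixing matrix, quadratic losses,
# ball constraint, 1/sqrt(t) step size) are illustrative assumptions.

rng = np.random.default_rng(0)
n, d, T = 5, 3, 200           # agents, dimension, iterations
R = 1.0                       # radius of the ball constraint X

# Doubly stochastic mixing matrix for a ring network (assumption).
W = np.zeros((n, n))
for i in range(n):
    W[i, i] = 0.5
    W[i, (i + 1) % n] = 0.25
    W[i, (i - 1) % n] = 0.25

# Local quadratic losses f_i(x) = 0.5 * ||A_i x - b_i||^2 (assumption).
A = rng.standard_normal((n, d, d))
b = rng.standard_normal((n, d))

def grad(i, x):
    return A[i].T @ (A[i] @ x - b[i])

def project_ball(v, radius=R):
    nrm = np.linalg.norm(v)
    return v if nrm <= radius else radius * v / nrm

z = np.zeros((n, d))          # dual (accumulated-gradient) variables
x = np.zeros((n, d))          # primal iterates, feasible at start
x_avg = np.zeros((n, d))      # running averages reported as output

for t in range(1, T + 1):
    g = np.array([grad(i, x[i]) for i in range(n)])
    z = W @ z + g             # mix duals with neighbors, add new gradients
    a_t = 1.0 / np.sqrt(t)    # classical DDA step size, O(1/sqrt(t)) rate
    # Prox step with Euclidean regularizer: project the scaled negative dual.
    x = np.array([project_ball(-a_t * z[i]) for i in range(n)])
    x_avg += (x - x_avg) / t  # ergodic average carries the convergence rate
```

The paper's first algorithm replaces the plain mixing step `W @ z + g` with a second-order dynamic average consensus estimate, and the second replaces the single prox step with an extrapolation between two such primal sequences.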
Similar Resources
Gossip Dual Averaging for Decentralized Optimization of Pairwise Functions
In decentralized networks (of sensors, connected objects, etc.), there is an important need for efficient algorithms to optimize a global cost function, for instance to learn a global model from the local data collected by each computing unit. In this paper, we address the problem of decentralized minimization of pairwise functions of the data points, where these points are distributed over the...
Random Walk Distributed Dual Averaging Method For Decentralized Consensus Optimization
In this paper, we address the problem of distributed learning over a decentralized network, arising from scenarios including distributed sensors or geographically separated data centers. We propose a fully distributed algorithm called random walk distributed dual averaging (RW-DDA) that only requires local updates. Our RW-DDA method improves the existing distributed dual averaging (DDA) method...
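A minimal sketch of the random-walk idea this blurb describes, assuming a single dual variable is carried along the walk and only the visited agent updates; the ring topology, quadratic losses, and step size are illustrative assumptions rather than the paper's exact RW-DDA method.

```python
import numpy as np

# Sketch: one dual variable travels along a random walk; only the
# currently visited agent performs an update. Losses, the walk's
# transition rule, and all names are illustrative assumptions.

rng = np.random.default_rng(1)
n, d, T = 5, 3, 500
A = rng.standard_normal((n, d, d))
b = rng.standard_normal((n, d))

def next_node(i):
    # Uniform random walk on a ring: hop to one of the two neighbors.
    return (i + rng.choice((-1, 1))) % n

def project_ball(v, radius=1.0):
    nrm = np.linalg.norm(v)
    return v if nrm <= radius else radius * v / nrm

z = np.zeros(d)               # single dual variable carried by the walk
x = np.zeros((n, d))          # each agent's latest primal iterate
i = 0                         # walk starts at agent 0

for t in range(1, T + 1):
    g = A[i].T @ (A[i] @ x[i] - b[i])     # local gradient at visited node
    z += g                                # accumulate gradients along the walk
    x[i] = project_ball(-z / np.sqrt(t))  # only agent i updates, locally
    i = next_node(i)                      # token moves to a neighbor
```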
Primal–dual Methods for Nonlinear Constrained Optimization
. . . If a function of several variables should be a maximum or minimum and there are between these variables one or several equations, then it will suffice to add to the proposed function the functions that should be zero, each multiplied by an undetermined quantity, and then to look for the maximum and the minimum as if the variables were independent; the equation that one will find combin...
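In modern notation, the recipe quoted above is the method of Lagrange multipliers: to extremize $f$ subject to $g_j(x)=0$, augment the objective with the constraints and treat the variables as free,

```latex
% Lagrange's recipe: form the Lagrangian, then set its gradient to zero
% as if the variables were unconstrained.
L(x,\lambda) = f(x) + \sum_{j=1}^{m} \lambda_j\, g_j(x),
\qquad
\nabla_x L(x,\lambda) = 0, \quad g_j(x) = 0, \ j = 1,\dots,m.
```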
Accelerated Primal-Dual Proximal Block Coordinate Updating Methods for Constrained Convex Optimization
Block Coordinate Update (BCU) methods enjoy low per-update computational complexity because every time only one or a few block variables would need to be updated among possibly a large number of blocks. They are also easily parallelized and thus have been particularly popular for solving problems involving large-scale datasets and/or variables. In this paper, we propose a primal-...
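A minimal sketch of the block-coordinate-update idea this blurb describes, where each iteration refreshes one randomly chosen block of variables; the least-squares objective, block partition, and step size are illustrative assumptions, not the paper's accelerated primal-dual method.

```python
import numpy as np

# Sketch: randomized block coordinate gradient descent on a
# least-squares problem (assumption). Per iteration, only the gradient
# block for one randomly chosen group of coordinates is formed and used.

rng = np.random.default_rng(2)
d, n_blocks, T = 12, 4, 300
blocks = np.array_split(np.arange(d), n_blocks)

A = rng.standard_normal((20, d))
b = rng.standard_normal(20)
step = 1.0 / np.linalg.norm(A, 2) ** 2   # safe step for f = 0.5*||Ax - b||^2

x = np.zeros(d)
for t in range(T):
    k = rng.integers(n_blocks)           # pick one block at random
    idx = blocks[k]
    r = A @ x - b                        # residual (often cached/updated)
    x[idx] -= step * (A[:, idx].T @ r)   # update only the chosen block
```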
Journal
Journal title: IEEE Transactions on Automatic Control
Year: 2023
ISSN: 0018-9286, 1558-2523, 2334-3303
DOI: https://doi.org/10.1109/tac.2022.3173062